Adversarial training (AT) is considered one of the most reliable defenses against adversarial attacks. However, models trained with AT sacrifice standard accuracy and do not generalize well to novel attacks. Recent works show improved generalization from adversarial samples under novel threat models such as the on-manifold threat model or the neural perceptual threat model. However, the former requires exact manifold information while the latter requires algorithmic relaxation. Motivated by these considerations, we exploit the underlying manifold information with normalizing flows, ensuring that the exact-manifold assumption holds. Moreover, we propose a novel threat model called the Joint Space Threat Model (JSTM), which can serve as a special case of the neural perceptual threat model that does not require additional relaxation to craft the corresponding adversarial attacks. Under JSTM, we develop novel adversarial attacks and defenses. The mixup strategy improves the standard accuracy of neural networks but sacrifices robustness when combined with AT. To tackle this issue, we propose the Robust Mixup strategy, in which we maximize the adversity of the interpolated images to gain robustness and prevent overfitting. Our experiments show that Interpolated Joint Space Adversarial Training (IJSAT) achieves good performance in standard accuracy, robustness, and generalization on the CIFAR-10/100, OM-ImageNet, and CIFAR-10-C datasets. IJSAT is also flexible: it can serve as a data augmentation method to improve standard accuracy, and it can be combined with many existing AT approaches to improve robustness.
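To make the Robust Mixup idea concrete, below is a minimal sketch assuming a standard PGD inner loop in pixel space (the paper's method operates in the joint image/latent space induced by the normalizing flow); every name and hyperparameter here is illustrative, not the authors' code.

```python
import torch
import torch.nn.functional as F

def robust_mixup_loss(model, x1, y1, x2, y2, alpha=1.0,
                      eps=8 / 255, step_size=2 / 255, steps=7):
    # Standard mixup: interpolate a pair of images with a Beta-sampled weight.
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    x_mix = lam * x1 + (1 - lam) * x2

    def mixed_ce(logits):
        return lam * F.cross_entropy(logits, y1) + (1 - lam) * F.cross_entropy(logits, y2)

    # "Maximize the adversity" of the interpolated image with a PGD loop.
    delta = torch.zeros_like(x_mix).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        grad, = torch.autograd.grad(mixed_ce(model(x_mix + delta)), delta)
        delta = (delta + step_size * grad.sign()).clamp(-eps, eps)
        delta = delta.detach().requires_grad_(True)

    # Train on the adversarially perturbed mixup sample.
    x_adv = (x_mix + delta.detach()).clamp(0, 1)
    return mixed_ce(model(x_adv))
```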
Recent studies have shown that robustness to adversarial attacks can be transferred across networks. In other words, we can make a model more robust with the help of a strong teacher model. We ask: rather than learning from a static teacher, can models "learn together" and "teach each other" to achieve better robustness? In this paper, we study how interactions among models affect robustness via knowledge distillation. We propose mutual adversarial training (MAT), in which multiple models are trained together and share the knowledge of adversarial examples to achieve improved robustness. MAT allows robust models to explore a larger adversarial sample space and to find more robust feature spaces and decision boundaries. Through extensive experiments on CIFAR-10 and CIFAR-100, we demonstrate that MAT can effectively improve model robustness and outperform state-of-the-art methods under white-box attacks, bringing a $\sim 8\%$ accuracy gain over vanilla adversarial training under PGD-100 attacks. In addition, we show that MAT can mitigate the robustness trade-off among different perturbation types, improving over AT baselines against the union of $l_\infty$, $l_2$, and $l_1$ attacks. These results show the superiority of the proposed method and demonstrate that collaborative learning is an effective strategy for designing robust models.
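As a concrete illustration, here is a minimal two-model sketch of what mutual adversarial training could look like, assuming PGD attacks and temperature-scaled distillation; it is an assumption-laden simplification, not the paper's exact algorithm.

```python
import torch
import torch.nn.functional as F

def pgd(model, x, y, eps=8 / 255, step_size=2 / 255, steps=7):
    # Craft an L-infinity adversarial example against `model`.
    delta = torch.zeros_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        grad, = torch.autograd.grad(F.cross_entropy(model(x + delta), y), delta)
        delta = (delta + step_size * grad.sign()).clamp(-eps, eps)
        delta = delta.detach().requires_grad_(True)
    return (x + delta).clamp(0, 1).detach()

def mat_losses(model_a, model_b, x, y, temp=4.0):
    # Each model shares its adversarial examples with its peer.
    x_adv_a, x_adv_b = pgd(model_a, x, y), pgd(model_b, x, y)
    losses = []
    for model, peer, x_peer in ((model_a, model_b, x_adv_b),
                                (model_b, model_a, x_adv_a)):
        # Supervised loss on adversarial examples crafted against both nets.
        ce = F.cross_entropy(model(x_adv_a), y) + F.cross_entropy(model(x_adv_b), y)
        # Distill from the peer's (detached) soft labels on the peer's examples.
        kd = F.kl_div(F.log_softmax(model(x_peer) / temp, dim=1),
                      F.softmax(peer(x_peer).detach() / temp, dim=1),
                      reduction="batchmean")
        losses.append(ce + temp ** 2 * kd)
    return losses  # one loss per model; step each optimizer separately
```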
Object detection plays a key role in many security-critical systems. Adversarial patch attacks, which are easy to implement in the physical world, pose a serious threat to state-of-the-art object detectors. Developing reliable defenses for object detectors against patch attacks is critical but severely understudied. In this paper, we propose Segment and Complete defense (SAC), a general framework for defending object detectors by detecting and removing adversarial patches. We first train a patch segmenter that outputs patch masks, providing pixel-level localization of adversarial patches. We then propose a self-adversarial training algorithm to robustify the patch segmenter. In addition, we design a robust shape completion algorithm that is guaranteed to remove the entire patch from the image whenever the output of the patch segmenter lies within a certain Hamming distance of the ground-truth patch mask. Our experiments on the COCO and xView datasets show that SAC achieves superior robustness even under strong adaptive attacks, with no performance drop on clean images, and generalizes to unseen patch shapes, attack budgets, and unseen attack methods. Furthermore, we present the APRICOT-Mask dataset, which augments the APRICOT dataset with pixel-level annotations of adversarial patches. We show that SAC can significantly reduce the targeted attack success rate of physical patch attacks.
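The defense pipeline reads naturally as three steps: segment, complete, and mask out. The sketch below illustrates it with `segmenter` and `detector` as placeholder modules and simple mask dilation standing in for the paper's robust shape completion algorithm; it is a hedged illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def sac_defend(segmenter, detector, image, threshold=0.5):
    # 1) Pixel-level patch localization: the segmenter outputs patch-mask
    #    logits of shape (N, 1, H, W).
    mask = (torch.sigmoid(segmenter(image)) > threshold).float()
    # 2) "Completion" stand-in: dilate the mask so that a prediction within
    #    a small Hamming distance of the truth still covers the whole patch.
    mask = F.max_pool2d(mask, kernel_size=5, stride=1, padding=2)
    # 3) Remove the suspected patch region before running the detector.
    return detector(image * (1 - mask))
```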
GPUs built on new architectures, such as the NVIDIA A100, are now equipped with Multi-Instance GPU (MIG) technology, which allows the GPU to be partitioned into multiple small, isolated instances. This technology provides more flexibility for users to support both deep learning training and inference workloads, but utilizing it efficiently can still be challenging. The vision of this paper is to provide a more comprehensive and practical benchmark study for MIG in order to eliminate the need for tedious manual benchmarking and tuning efforts. To achieve this vision, the paper presents MIGPerf, an open-source tool that streamlines the benchmark study for MIG. Using MIGPerf, the authors conduct a series of experiments, including deep learning training and inference characterization on MIG, GPU sharing characterization, and framework compatibility with MIG. The results of these experiments provide new insights and guidance for users to employ MIG effectively, and lay the foundation for further research on the orchestration of hybrid training and inference workloads on MIGs. The code and results are released on https://github.com/MLSysOps/MIGProfiler. This work is still in progress and more results will be published soon.
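To give a feel for what such a benchmark measures, below is a minimal, hypothetical throughput loop in the spirit of MIGPerf (see the repository above for the real tool); it assumes PyTorch and torchvision, and that CUDA_VISIBLE_DEVICES is pointed at a single MIG instance.

```python
import time
import torch
import torchvision.models as models

def bench_inference(batch_size=32, iters=100, warmup=10):
    # Whatever CUDA_VISIBLE_DEVICES exposes (e.g. one MIG slice) is "cuda".
    model = models.resnet50().cuda().eval()
    x = torch.randn(batch_size, 3, 224, 224, device="cuda")
    with torch.no_grad():
        for _ in range(warmup):
            model(x)
        torch.cuda.synchronize()
        start = time.time()
        for _ in range(iters):
            model(x)
        torch.cuda.synchronize()
    return batch_size * iters / (time.time() - start)

# Run once per MIG instance, e.g.:
#   CUDA_VISIBLE_DEVICES=MIG-<instance-uuid> python bench.py
if __name__ == "__main__":
    print(f"throughput: {bench_inference():.1f} images/s")
```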
Function approximation (FA) has been a critical component in solving large zero-sum games. Yet, little attention has been given to FA in solving \textit{general-sum} extensive-form games, despite them being widely regarded as computationally more challenging than their fully competitive or cooperative counterparts. A key challenge is that for many equilibria in general-sum games, no simple analogue exists to the state value function used in Markov Decision Processes and zero-sum games. In this paper, we propose learning the \textit{Enforceable Payoff Frontier} (EPF) -- a generalization of the state value function for general-sum games. We approximate the optimal \textit{Stackelberg extensive-form correlated equilibrium} by representing EPFs with neural networks and training them using appropriate backup operations and loss functions. This is the first method that applies FA to the Stackelberg setting, allowing us to scale to much larger games while still enjoying performance guarantees based on FA error. Additionally, our proposed method guarantees incentive compatibility and is easy to evaluate without having to depend on self-play or approximate best-response oracles.
Correlated Equilibrium is a solution concept that is more general than Nash Equilibrium (NE) and can lead to outcomes with better social welfare. However, its natural extension to the sequential setting, the \textit{Extensive Form Correlated Equilibrium} (EFCE), requires a quadratic amount of space to solve, even in restricted settings without randomness in nature. To alleviate these concerns, we apply \textit{subgame resolving}, a technique extremely successful in finding NE in zero-sum games, to solving general-sum EFCEs. Subgame resolving refines a correlation plan in an \textit{online} manner: instead of solving for the full game upfront, it only solves for strategies in subgames that are reached in actual play, resulting in significant computational gains. In this paper, we (i) lay out the foundations to quantify the quality of a refined strategy, in terms of the \textit{social welfare} and \textit{exploitability} of correlation plans, (ii) show that EFCEs possess a sufficient amount of independence between subgames to perform resolving efficiently, and (iii) provide two algorithms for resolving, one using linear programming and the other based on regret minimization. Both methods guarantee \textit{safety}, i.e., they will never be counterproductive. Ours is the first application of an online method to the correlated, general-sum setting.
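The regret-minimization-based resolver builds on standard regret-matching updates. As background, here is a generic regret-matching sketch, the textbook building block rather than the paper's full resolving algorithm.

```python
import numpy as np

class RegretMatcher:
    """Regret matching for a single decision point with n actions."""

    def __init__(self, n_actions):
        self.regret_sum = np.zeros(n_actions)
        self.strategy_sum = np.zeros(n_actions)

    def strategy(self):
        # Play proportionally to positive cumulative regret.
        pos = np.maximum(self.regret_sum, 0.0)
        total = pos.sum()
        return pos / total if total > 0 else np.full(len(pos), 1.0 / len(pos))

    def update(self, action_utils):
        # action_utils[a]: counterfactual utility of playing action a (ndarray).
        sigma = self.strategy()
        self.strategy_sum += sigma
        self.regret_sum += action_utils - sigma @ action_utils

    def average_strategy(self):
        # The average strategy is what converges, not the current one.
        total = self.strategy_sum.sum()
        return self.strategy_sum / total if total > 0 else self.strategy()
```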
In this paper, we study the \underline{R}obust \underline{o}ptimization for \underline{se}quence \underline{Net}worked \underline{s}ubmodular maximization (RoseNets) problem. We interweave robust optimization with sequence networked submodular maximization. The elements are connected by a directed acyclic graph, and the objective function is submodular not on the elements but on the edges of the graph. In such a networked submodular scenario, the impact of removing an element from a sequence depends both on its position in the sequence and on its position in the network, which makes existing robust algorithms inapplicable. We take the first step in studying the RoseNets problem and design a robust greedy algorithm, which is robust against the removal of an arbitrary subset of the selected elements. The approximation ratio of the algorithm depends both on the number of removed elements and on the network topology. We further conduct experiments on real applications of recommendation and link prediction. The experimental results demonstrate the effectiveness of the proposed algorithm.
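For intuition about the edge-based objective, consider the plain (non-robust) greedy step sketched below, where appending an element activates the edges from previously selected elements to it; the names and gain semantics are illustrative assumptions, and the paper's robust variant additionally hedges against adversarial removal of selected elements.

```python
import networkx as nx

def greedy_sequence(graph: nx.DiGraph, edge_value, k: int):
    """Greedily build a length-k sequence when value lives on DAG edges."""
    seq = []
    candidates = set(graph.nodes)
    for _ in range(k):
        # Marginal gain of appending v: the newly activated edges, i.e. edges
        # from already-selected elements into v (a self-loop, if present,
        # encodes an element's individual value).
        def gain(v):
            g = edge_value(v, v) if graph.has_edge(v, v) else 0.0
            return g + sum(edge_value(u, v) for u in seq if graph.has_edge(u, v))
        best = max(candidates, key=gain)
        seq.append(best)
        candidates.remove(best)
    return seq
```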
Learning with noisy labels (LNL) is a classic problem that has been extensively studied for image tasks, but much less so for video. A straightforward migration from images to videos that ignores the properties of videos, such as computational cost and redundant information, is not a sound choice. In this paper, we propose two new strategies for video analysis with noisy labels: 1) a lightweight channel selection method dubbed Channel Truncation for feature-based label noise detection, which selects the most discriminative channels to split clean and noisy instances in each category; and 2) a novel contrastive strategy dubbed Noise Contrastive Learning, which constructs the relationship between clean and noisy instances to regularize model training. Experiments on three well-known benchmark datasets for video classification show that our proposed tru{\bf N}cat{\bf E}-split-contr{\bf A}s{\bf T} (NEAT) significantly outperforms the existing baselines. By reducing the feature dimension to 10\% of its original size, our method achieves a noise detection F1-score above 0.4 and over a 5\% classification accuracy improvement on the Mini-Kinetics dataset under severe noise (symmetric-80\%). Thanks to Noise Contrastive Learning, the average classification accuracy improvement on Mini-Kinetics and Sth-Sth-V1 is over 1.6\%.
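A loose sketch of the Channel Truncation step follows: for each (possibly noisy) class, rank channels by how strongly they agree with the class prototype, keep the top fraction, and score instances on the truncated features. All names and the scoring rule are illustrative assumptions, not the authors' API.

```python
import torch
import torch.nn.functional as F

def truncate_channels(features, keep_ratio=0.1):
    # features: (N, C) pooled video features for one labeled class.
    center = features.mean(dim=0)               # per-channel class prototype
    scores = (features * center).mean(dim=0)    # channel discriminativeness proxy
    k = max(1, int(keep_ratio * features.shape[1]))
    idx = scores.topk(k).indices                # keep most discriminative channels
    return features[:, idx], center[idx]

def noise_scores(features, keep_ratio=0.1):
    trunc, center = truncate_channels(features, keep_ratio)
    # Lower similarity to the truncated prototype -> more likely a noisy label.
    return F.cosine_similarity(trunc, center.unsqueeze(0), dim=1)
```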
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about common practice as well as the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%), and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based; of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and 50% performed ensembling based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
Conditional normalizing flows can generate diverse image samples for solving inverse problems. Most normalizing flows for inverse problems in imaging employ the conditional affine coupling layer, which can generate diverse images quickly. However, unintended severe artifacts are occasionally observed in their outputs. In this work, we address this critical issue by investigating the origins of these artifacts and proposing conditions to avoid them. First, we empirically and theoretically reveal that these problems are caused by ``exploding variance'' in the conditional affine coupling layer for certain out-of-distribution (OOD) conditional inputs. Then, we further validate that the probability of erroneous artifacts occurring in a pixel is highly correlated with a Mahalanobis distance-based OOD score for inverse problems in imaging. Lastly, based on our investigations, we propose a condition for avoiding exploding variance and, building on it, suggest a simple remedy that substitutes modified rational quadratic spline coupling layers for the affine coupling layers in normalizing flows, to encourage the robustness of generated image samples. Our experimental results demonstrate that the suggested methods effectively suppress the critical artifacts occurring in normalizing flows for super-resolution space generation and low-light image enhancement, without compromising performance.
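The "exploding variance" failure mode is easy to see in code: in a conditional affine coupling layer the scale is exp(s), which is unbounded, so an OOD conditioning input that drives s large blows up the output variance, while spline couplings confine the transform to a bounded interval. Below is a simplified, assumed implementation for illustration only.

```python
import torch
import torch.nn as nn

class ConditionalAffineCoupling(nn.Module):
    """Toy conditional affine coupling on flat vectors (assumed shapes)."""

    def __init__(self, dim, cond_dim, hidden=64):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)))

    def forward(self, z, cond):
        z1, z2 = z[:, :self.half], z[:, self.half:]
        s, t = self.net(torch.cat([z1, cond], dim=1)).chunk(2, dim=1)
        # exp(s) is unbounded: an out-of-distribution `cond` that drives s
        # large makes the output variance explode -> severe image artifacts.
        # Rational quadratic spline couplings instead act on a fixed bounded
        # interval, which keeps the transform well behaved.
        return torch.cat([z1, z2 * torch.exp(s) + t], dim=1)
```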